Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning. However, existing methods usually require a large computational cost, and the activation function causes some intermediate-layer features to be lost. It is therefore challenging to make the model lightweight while reducing the impact of intermediate feature loss on reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the above problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature Shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effect of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in the WDIB, and effectively cross-fuse features of different granularities through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features that CNN models lack, we introduce a Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that the proposed FIWHN achieves a good balance between performance and efficiency and is well suited to downstream tasks that must operate in low-resolution scenarios.
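The abstract does not give implementation details, but the core idea of a wide residual unit whose residual branch is rescaled by a learnable weight can be pictured with the minimal PyTorch sketch below; the module name, channel widths, and weighting scheme are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class WideResidualWeightingUnit(nn.Module):
    """Hypothetical sketch of a wide residual unit with a learnable
    residual weight, in the spirit of the WIRW/WCRW units named above."""
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        wide = channels * expansion                     # widen before the activation
        self.expand = nn.Conv2d(channels, wide, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.reduce = nn.Conv2d(wide, channels, kernel_size=3, padding=1)
        self.res_weight = nn.Parameter(torch.ones(1))   # learnable residual weighting

    def forward(self, x):
        res = self.reduce(self.act(self.expand(x)))
        return x + self.res_weight * res

block = WideResidualWeightingUnit(channels=64)
print(block(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```

The intuition is that widening the feature maps before the activation leaves more room for informative features to survive it, and the learnable residual weight lets the network decide how much of the processed branch to keep.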
With the development of deep learning, single-image super-resolution (SISR) has achieved significant breakthroughs. Recently, methods that improve the performance of SISR networks through global feature interaction have been proposed. However, the ability of neurons to dynamically adjust their function in response to context is neglected. To solve this problem, we propose a lightweight Cross-receptive Focused Inference Network (CFIN), a hybrid network composed of a convolutional neural network (CNN) and a Transformer. Specifically, a novel Cross-receptive Field Guided Transformer (CFGT) is designed to adaptively modify network weights by combining modulated convolution kernels with locally representative semantic information. In addition, a CNN-based Cross-scale Information Aggregation Module (CIAM) is proposed so that the model focuses better on potentially useful information and the efficiency of the Transformer stage is improved. Extensive experiments show that the proposed CFIN is a lightweight and efficient SISR model that achieves a good balance between computational cost and model performance.
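The abstract only names the mechanism, but the idea of a convolution whose kernel is modulated by pooled local semantic context can be sketched roughly as follows; this is an illustrative stand-in under assumed shapes and pooling choices, not the CFGT implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedConv(nn.Module):
    """Toy sketch of a convolution whose kernel is modulated by local
    semantic information, loosely in the spirit of the CFGT described above."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.02)
        self.to_mod = nn.Linear(channels, channels)   # context -> per-channel gates
        self.padding = kernel_size // 2

    def forward(self, x):
        b, c, h, w = x.shape
        context = x.mean(dim=(2, 3))                  # (b, c) pooled semantics
        mod = torch.sigmoid(self.to_mod(context))     # (b, c) modulation gates
        out = []
        for i in range(b):                            # per-sample kernel modulation
            k = self.weight * mod[i].view(1, c, 1, 1)
            out.append(F.conv2d(x[i:i + 1], k, padding=self.padding))
        return torch.cat(out, dim=0)
```

Here a pooled summary of the input plays the role of the "locally representative semantic information", and the resulting gates rescale a shared kernel for each sample.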
Along with the rapid progress of visual tracking, existing benchmarks have become less informative due to sample redundancy and weak discrimination between current trackers, and evaluating on all datasets is very time-consuming. Therefore, a small and informative benchmark that covers all typical challenging scenarios to facilitate assessing tracker performance is of great interest. In this work, we develop a principled way to construct a small and informative tracking benchmark (ITB), containing 7% of the data from existing and newly collected datasets, which enables efficient evaluation while ensuring effectiveness. Specifically, we first design a quality assessment mechanism to select the most informative sequences from existing benchmarks, taking into account 1) the challenge level, 2) the discriminative strength, and 3) the density of appearance variations. Furthermore, we collect additional sequences to ensure the diversity and balance of tracking scenarios, resulting in a total of 20 sequences per scenario. By analyzing the results of 15 state-of-the-art trackers retrained on the same data, we determine effective methods for robust tracking in each scenario and reveal new challenges for future research directions in this field.
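A toy illustration of how the three selection criteria named above could be combined into a single ranking score is sketched below; the weighting, normalization, and example numbers are assumptions, not the benchmark's actual quality-assessment mechanism.

```python
def informativeness_score(challenge_level, discriminative_strength,
                          appearance_variation_density,
                          weights=(1.0, 1.0, 1.0)):
    """Toy illustration: combine the three criteria named in the abstract
    into a single score used to rank candidate sequences.
    All inputs are assumed to be normalized to [0, 1]."""
    w1, w2, w3 = weights
    return (w1 * challenge_level
            + w2 * discriminative_strength
            + w3 * appearance_variation_density) / (w1 + w2 + w3)

# Rank candidate sequences and keep the most informative ones (example values).
candidates = {"seq_a": (0.9, 0.7, 0.8), "seq_b": (0.4, 0.5, 0.3)}
ranked = sorted(candidates, key=lambda s: informativeness_score(*candidates[s]),
                reverse=True)
print(ranked)  # ['seq_a', 'seq_b']
```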
Modern video object segmentation (VOS) algorithms achieve remarkably high performance in a sequential processing manner, yet the currently prevalent pipelines still exhibit some obvious shortcomings, such as error accumulation, unknown robustness, and a lack of proper interpretation tools. In this paper, we place the semi-supervised video object segmentation problem into a cyclic workflow and address the above shortcomings through the inherent cyclic property of semi-supervised VOS systems. First, incorporating a cyclic mechanism into the standard sequential flow can produce more consistent pixel-wise representations. Relying on the accurate reference mask in the starting frame, we show that the error-propagation problem can be mitigated. Next, a simple gradient-correction module, which naturally extends the offline cyclic pipeline to an online fashion, can highlight the high-frequency and detailed parts of the results to further improve segmentation quality while keeping the computational cost feasible. Meanwhile, such correction can protect the network from severe performance degradation caused by interference signals. Finally, we develop a cycle effective receptive field (cycle-ERF) based on the gradient-correction process to provide a new perspective for analyzing object-specific regions of interest. We conduct comprehensive comparisons and detailed analyses on the challenging DAVIS16, DAVIS17, and YouTube-VOS benchmarks, showing that the cyclic mechanism helps improve segmentation quality and the robustness of VOS systems, and further provides qualitative comparisons and interpretation of different VOS algorithms. The code for this project can be found at https://github.com/lyxok1/stm-trings
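As a rough illustration of the cyclic idea (propagate masks forward through the clip, then propagate back to the reference frame and compare against the given reference mask), one could write something like the sketch below; `segment_next` stands in for an arbitrary mask-propagation network and is an assumption, not the authors' API.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(frames, ref_mask, segment_next):
    """Hypothetical sketch: propagate the reference mask to the end of the
    clip, then propagate back and penalize disagreement with the reference.
    `segment_next(frame_a, mask_a, frame_b)` is assumed to return the
    predicted mask of frame_b given frame_a and its mask."""
    mask = ref_mask
    # forward pass through the sequence
    for t in range(1, len(frames)):
        mask = segment_next(frames[t - 1], mask, frames[t])
    # backward (cyclic) pass to the starting frame
    for t in range(len(frames) - 1, 0, -1):
        mask = segment_next(frames[t], mask, frames[t - 1])
    # disagreement with the trusted reference mask closes the cycle
    return F.binary_cross_entropy(mask.clamp(1e-6, 1 - 1e-6), ref_mask)
```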
Although deep-learning-based tracking methods have achieved substantial progress, they require large-scale and high-quality annotated data for sufficient training. To eliminate expensive and exhaustive annotation, we study self-supervised learning for visual tracking. In this work, we develop a crop-transform-paste operation, which is able to synthesize sufficient training data by simulating various appearance variations during tracking, including appearance variations of the object and background interference. Since the target state is known in all synthesized data, existing deep trackers can be trained with the synthesized data in the usual way without human annotation. The proposed target-aware data-synthesis method adapts existing tracking approaches within a self-supervised learning framework without any algorithmic change. Thus, the proposed self-supervised learning mechanism can be seamlessly integrated into existing tracking frameworks for training. Extensive experiments show that our method 1) achieves favorable performance against supervised learning schemes under cases with limited annotations; 2) helps deal with various tracking challenges such as object deformation, occlusion, or background clutter thanks to its manipulability; 3) performs favorably against state-of-the-art unsupervised tracking methods; and 4) boosts the performance of various state-of-the-art supervised learning frameworks, including SiamRPN++, DiMP, and TransT (Transformer-based).
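The crop-transform-paste operation itself is easy to picture; a minimal numpy sketch (illustrative only, with arbitrary transforms standing in for the appearance variations mentioned above) could look like this.

```python
import numpy as np

def crop_transform_paste(frame, box, background, rng=np.random):
    """Toy sketch of the crop-transform-paste idea: crop the target given
    by `box` (x, y, w, h), apply a simple photometric transform, and paste
    it at a random location in `background`. The new box is the free label."""
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w].astype(np.float32)
    patch = np.clip(patch * rng.uniform(0.7, 1.3), 0, 255)  # brightness jitter
    if rng.rand() < 0.5:
        patch = patch[:, ::-1]                               # horizontal flip
    H, W = background.shape[:2]
    nx, ny = rng.randint(0, W - w), rng.randint(0, H - h)
    synth = background.copy()
    synth[ny:ny + h, nx:nx + w] = patch.astype(background.dtype)
    return synth, (nx, ny, w, h)   # synthesized frame and its known target state
```

Because the pasted location is chosen by the synthesis code, every generated frame comes with an exact target state, which is what removes the need for human annotation.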
Modeling time series has become increasingly important in a variety of applications. Overall, the data evolve by following different patterns, which are usually caused by different user behaviors. Given a time series, we define evolutionary genes to capture the latent user behaviors and describe how these behaviors lead to the generation of the time series. In particular, we propose a unified framework that recognizes the different evolutionary genes of segments by learning a classifier, and realizes the evolutionary genes through an adversarial generator that estimates the distribution of segments. Experimental results on a synthetic dataset and five real-world datasets show that our method not only achieves good prediction results (e.g., +10.56% in terms of F1), but also provides explanations of the results.
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Computation speed is critical as detection is a necessary component for safety. Existing approaches, however, are computationally expensive due to the high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. The input representation, network architecture, and model optimization are especially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark and a large-scale 3D vehicle detection benchmark. On both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still running at >28 FPS.
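The BEV input representation can be illustrated with a small numpy sketch that rasterizes a point cloud into a discretized occupancy grid with height encoded as channels; the ranges, resolution, and number of height slices below are arbitrary assumptions, not the paper's exact configuration.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(0., 70.), y_range=(-40., 40.),
                      z_range=(-2.5, 1.0), resolution=0.1, z_bins=35):
    """Toy BEV rasterization: each (x, y, z) point marks one cell of a
    height-sliced occupancy grid viewed from above."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    W = int((x_range[1] - x_range[0]) / resolution)
    H = int((y_range[1] - y_range[0]) / resolution)
    xi = ((x - x_range[0]) / resolution).astype(int)
    yi = ((y - y_range[0]) / resolution).astype(int)
    zi = ((z - z_range[0]) / (z_range[1] - z_range[0]) * z_bins).astype(int)
    bev = np.zeros((H, W, z_bins), dtype=np.float32)
    bev[yi, xi, np.clip(zi, 0, z_bins - 1)] = 1.0   # occupancy per height slice
    return bev
```

A dense 2D grid like this is what lets a standard convolutional network produce oriented boxes as pixel-wise predictions instead of processing raw, high-dimensional point sets.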
In this paper, we study the use of a deep Transformer translation model for the CCMT 2022 Chinese-Thai low-resource machine translation task. We first explore the experiment settings (including the number of BPE merge operations, dropout probability, embedding size, etc.) for the low-resource scenario with a 6-layer Transformer. Considering that increasing the number of layers also increases the regularization on new model parameters (dropout modules are also introduced when using more layers), we adopt the best-performing setting but increase the depth of the Transformer to 24 layers to obtain improved translation quality. Our work obtains state-of-the-art (SOTA) performance on Chinese-to-Thai translation in the constrained evaluation.
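To make the tuning workflow concrete, the sketch below shows one way such a search over low-resource settings could be expressed; all names and values are illustrative placeholders, not the settings or results reported by the authors.

```python
from itertools import product

# Illustrative placeholder values only -- not the paper's reported settings.
search_space = {
    "bpe_merge_operations": [8000, 16000, 32000],
    "dropout": [0.1, 0.2, 0.3],
    "embedding_size": [256, 512],
}

def dev_bleu(config):
    """Stand-in for training a 6-layer Transformer with `config`
    and returning its BLEU score on the development set."""
    raise NotImplementedError

candidates = [dict(zip(search_space, values))
              for values in product(*search_space.values())]
# best = max(candidates, key=dev_bleu)     # pick the best shallow setting first,
# final_config = {**best, "layers": 24}    # then only increase the depth
```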
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so the analysis of the robustness of c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored in the community. In this paper, we propose a novel certification method, which is the first work to leverage a scalable approach for c-MARL to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared with single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potentially small impact that changing the action of a single agent has on the global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ a false discovery rate (FDR) controlling procedure that accounts for the importance of each agent to certify per-state robustness, and propose a tree-search-based algorithm to find a lower bound on the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms, QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
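The false discovery rate control mentioned above is a standard multiple-testing tool; a generic Benjamini-Hochberg procedure (the textbook version, not the paper's agent-importance-weighted variant) can be sketched as follows.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Standard Benjamini-Hochberg FDR controlling procedure: returns a
    boolean mask of hypotheses rejected at FDR level `alpha`."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)   # k/m * alpha for k = 1..m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank meeting its threshold
        rejected[order[:k + 1]] = True     # reject all hypotheses up to rank k
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20], alpha=0.05))
# [ True  True False False False]
```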
Supervised approaches generally rely on majority-based labels. However, it is hard to achieve high agreement among annotators in subjective tasks such as hate speech detection. Existing neural network models principally regard labels as categorical variables while ignoring the semantic information in the diverse label texts. In this paper, we propose AnnoBERT, a first-of-its-kind architecture that integrates annotator characteristics and label text with a transformer-based model to detect hate speech: it builds unique representations based on each annotator's characteristics via Collaborative Topic Regression (CTR) and integrates label text to enrich textual representations. During training, the model associates annotators with their label choices given a piece of text; during evaluation, when label information is not available, the model predicts the aggregated label given by the participating annotators by utilising the learnt association. The proposed approach shows an advantage in detecting hate speech, especially in the minority class and in edge cases with annotator disagreement. The improvement in overall performance is largest when the dataset is more label-imbalanced, suggesting the practical value of the approach for identifying real-world hate speech, since the volume of hateful content in the wild is extremely small on social media compared with normal (non-hate) speech. Through ablation studies, we show the relative contributions of annotator embeddings and label text to the model performance, and test a range of alternative annotator embeddings and label-text combinations.
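As a rough picture of conditioning a hate-speech classifier on annotator representations and label text, a toy PyTorch sketch follows; the fusion scheme, dimensions, and encoder inputs are assumptions for illustration, not the AnnoBERT implementation.

```python
import torch
import torch.nn as nn

class AnnotatorConditionedClassifier(nn.Module):
    """Toy sketch: fuse a text representation with an annotator embedding and a
    label-text embedding, then score the (text, annotator, label) combination.
    This is an illustrative stand-in, not the AnnoBERT architecture."""
    def __init__(self, num_annotators, text_dim=768, annot_dim=64):
        super().__init__()
        self.annotator_emb = nn.Embedding(num_annotators, annot_dim)
        self.scorer = nn.Sequential(
            nn.Linear(text_dim + annot_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, text_repr, annotator_ids, label_text_repr):
        a = self.annotator_emb(annotator_ids)
        fused = torch.cat([text_repr, a, label_text_repr], dim=-1)
        return self.scorer(fused).squeeze(-1)   # score for this label choice
```

At training time such a scorer can be driven by each annotator's own label choice; at evaluation time, scores over the participating annotators can be aggregated when individual labels are unavailable.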